# Incremental Pre-training

## Vietnamese Llama2 7B 40GB

- **Developer:** bkai-foundation-models
- **License:** Other
- **Tags:** Large Language Model · Transformers · Multilingual

A Vietnamese-optimized model based on Llama2-chat 7B that significantly improves Vietnamese language processing through incremental pre-training and a more efficient tokenizer.
## ZhiLu 13B Instruct

- **Developer:** SYSU-MUCFC-FinTech-Research-Center
- **License:** Apache-2.0
- **Tags:** Large Language Model · Transformers

ZhiLu is a financial large language model built on Chinese Alpaca2-13B. It achieves a leap in capability through massive incremental pre-training on Chinese and English corpora and alignment with high-quality instruction data, with a focus on the financial domain.
## CodeS 7B

- **Developer:** seeklhy
- **License:** Apache-2.0
- **Tags:** Large Language Model · Transformers · Other

CodeS-7B is a large language model optimized for SQL generation. Built by incremental pre-training of StarCoderBase-7B, it supports a maximum context length of 8,192 tokens.
## Ziya LLaMA 13B Pretrain V1

- **Developer:** IDEA-CCNL
- **License:** GPL-3.0
- **Tags:** Large Language Model · Transformers · Multilingual

A 13-billion-parameter pre-trained model based on the LLaMA architecture, with tokenization optimized for Chinese. It completed 110 billion tokens of incremental pre-training on Chinese and English data, significantly improving Chinese generation and comprehension capabilities.
## vBERT 2021 Base

- **Developer:** VMware
- **License:** Apache-2.0
- **Tags:** Large Language Model · Transformers · English

VMware's BERT-base model optimized for technical domains, using incremental pre-training to improve its handling of proprietary terminology.
© 2025 AIbase